
    Parallel Exhaustive Search without Coordination

    We analyze parallel algorithms in the context of exhaustive search over totally ordered sets. Imagine an infinite list of "boxes", with a "treasure" hidden in one of them, where the boxes' order reflects the importance of finding the treasure in a given box. At each time step, a search protocol executed by a searcher has the ability to peek into one box and see whether the treasure is present or not. By equally dividing the workload between them, k searchers can find the treasure k times faster than one searcher. However, this straightforward strategy is very sensitive to failures (e.g., crashes of processors), and overcoming this issue seems to require a large amount of communication. We therefore address the question of designing parallel search algorithms that maximize their speed-up and maintain high levels of robustness, while minimizing the amount of resources used for coordination. Based on the observation that algorithms that avoid communication are inherently robust, we analyze the best running time performance of non-coordinating algorithms. Specifically, we devise non-coordinating algorithms that achieve a speed-up of 9/8 for two searchers, a speed-up of 4/3 for three searchers, and in general, a speed-up of (k/4)(1+1/k)^2 for any k ≥ 1 searchers. Thus, asymptotically, the speed-up is only four times worse compared to the case of full coordination, and our algorithms are surprisingly simple and hence applicable. Moreover, these bounds are tight in a strong sense, as no non-coordinating search algorithm can achieve better speed-ups. Overall, we highlight that, in faulty contexts in which coordination between the searchers is technically difficult to implement, intrusive with respect to privacy, and/or costly in terms of resources, it might well be worth giving up on coordination and simply running our non-coordinating exhaustive search algorithms.
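    As a quick sanity check of the numbers quoted above (an illustrative aside, not part of the paper), the snippet below evaluates the speed-up formula (k/4)(1+1/k)^2 for small k and compares it with the full-coordination speed-up k; the ratio approaches 4, matching the claim that giving up coordination costs at most a factor of four asymptotically.

        from fractions import Fraction

        def noncoordinating_speedup(k: int) -> Fraction:
            """Speed-up (k/4) * (1 + 1/k)^2 stated in the abstract."""
            return Fraction(k, 4) * (1 + Fraction(1, k)) ** 2

        for k in (1, 2, 3, 10, 100):
            s = noncoordinating_speedup(k)
            # Full coordination achieves speed-up exactly k, so k/s tends to 4.
            print(k, s, float(k / s))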

    Low diameter graph decompositions by approximate distance computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low diameter graph decomposition with small edge cutting probabilities, with all existing techniques heavily building on single source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computation, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge, since the existing constructions of low diameter graph decompositions with small edge cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low diameter decompositions with small edge cutting probabilities which replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is “used” only logarithmically many times, which is of interest for capacitated problems and for simulating CONGEST algorithms on the tree into which the graph is embedded.
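    For orientation (an illustrative sketch, not the paper's construction), the clustering idea of Miller et al. that the paper builds on can be phrased as follows: every node v draws an exponential shift delta_v, and each node u joins the cluster of the centre v minimising dist(v, u) − delta_v, which can be computed by a single multi-source Dijkstra run. The sketch below uses exact distances; the paper's contribution is precisely to obtain such decompositions when only approximate SSSP is available. Function and parameter names are ours.

        import heapq
        import random

        def exp_shift_clustering(adj, beta=0.5, seed=0):
            """Exponential-shift clustering in the style of Miller et al. (SPAA 2013).

            adj maps each node (a comparable id, e.g. an int) to a list of
            (neighbour, length) pairs with positive lengths.  Node u joins the
            cluster of the centre v minimising dist(v, u) - delta_v, where
            delta_v ~ Exp(beta); this is one Dijkstra run in which centre v
            "starts" at time max_delta - delta_v.
            """
            rng = random.Random(seed)
            delta = {v: rng.expovariate(beta) for v in adj}
            max_delta = max(delta.values())

            arrival = {v: max_delta - delta[v] for v in adj}  # tentative arrival times
            owner = {v: v for v in adj}
            heap = [(arrival[v], v, v) for v in adj]
            heapq.heapify(heap)

            while heap:
                t, u, centre = heapq.heappop(heap)
                if t > arrival[u]:
                    continue                                  # stale heap entry
                owner[u] = centre
                for w, length in adj[u]:
                    if t + length < arrival[w]:
                        arrival[w] = t + length
                        heapq.heappush(heap, (t + length, w, centre))
            return owner                                      # owner[u] = cluster centre of u

    Roughly speaking, with shift parameter beta the resulting clusters have diameter O(log n / beta) with high probability, while an edge of length ℓ is cut with probability O(beta · ℓ), which is the low-diameter / linear-cutting-probability trade-off referred to above.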

    Distributed algorithms for low stretch spanning trees

    Given an undirected graph with integer edge lengths, we study the problem of approximating the distances in the graph by a spanning tree, based on the notion of stretch. Our main contribution is a distributed algorithm in the CONGEST model of computation that constructs a random spanning tree with the guarantee that the expected stretch of every edge is O(log^3 n), where n is the number of nodes in the graph. If the graph is unweighted, then this algorithm can be implemented to run in O(D) rounds, where D is the hop-diameter of the graph, thus being asymptotically optimal. In the weighted case, the run-time of our algorithm matches the currently best known bound for exact distance computations, i.e., Õ(min{√(nD), √n·D^{1/4} + n^{3/5} + D}). We stress that this is the first distributed construction of spanning trees leading to poly-logarithmic expected stretch with non-trivial running time.
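    To make the notion concrete: the stretch of an edge (u, v) of length w(u, v) with respect to a spanning tree T is dist_T(u, v) / w(u, v), and the guarantee above bounds its expectation over the random choice of T. A small illustrative helper (using networkx for the tree distances; names are ours):

        import networkx as nx

        def edge_stretches(graph: nx.Graph, tree: nx.Graph, weight="weight"):
            """Stretch of every graph edge: tree distance between its endpoints
            divided by the edge's own length (missing weights default to 1)."""
            stretches = {}
            for u, v, data in graph.edges(data=True):
                d_tree = nx.shortest_path_length(tree, u, v, weight=weight)
                stretches[(u, v)] = d_tree / data.get(weight, 1)
            return stretches

        # Toy example: a unit-length 4-cycle with the path 0-1-2-3 as spanning tree.
        g, t = nx.cycle_graph(4), nx.path_graph(4)
        print(edge_stretches(g, t))   # the non-tree edge (0, 3) has stretch 3, the rest 1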

    On Online Labeling with Polynomially Many Labels

    In the online labeling problem with parameters n and m we are presented with a sequence of n keys from a totally ordered universe U and must assign each arriving key a label from the label set {1, 2, ..., m} so that the order of labels (strictly) respects the ordering on U. As new keys arrive it may be necessary to change the labels of some items; such changes may be done at any time at unit cost for each change. The goal is to minimize the total cost. An alternative formulation of this problem is the file maintenance problem, in which the items, instead of being labeled, are maintained in sorted order in an array of length m, and we pay unit cost for moving an item. For the case m = cn for constant c > 1, there are known algorithms that use at most O(n log^2 n) relabelings in total [Itai, Konheim, Rodeh, 1981], and it was shown recently that this is asymptotically optimal [Bulánek, Koucký, Saks, 2012]. For the case of m = Θ(n^C) for C > 1, algorithms are known that use O(n log n) relabelings. A matching lower bound was claimed in [Dietz, Seiferas, Zhang, 2004]. That proof involved two distinct steps: a lower bound for a problem they call prefix bucketing and a reduction from prefix bucketing to online labeling. The reduction seems to be incorrect, leaving a (seemingly significant) gap in the proof. In this paper we close the gap by presenting a correct reduction to prefix bucketing. Furthermore, we give a simplified and improved analysis of the prefix bucketing lower bound. This improvement allows us to extend the lower bounds for online labeling to the case where the number m of labels is superpolynomial in n. In particular, for superpolynomial m we get an asymptotically optimal lower bound Ω((n log n) / (log log m − log log n)). Comment: 15 pages, Presented at European Symposium on Algorithms 201
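    For intuition about the cost measure (and only for that; this is a deliberately naive strategy, far from the bounds discussed above), the sketch below maintains the file maintenance formulation directly: items sit in sorted order in an array of length m with None marking free slots, and each insertion shifts items toward the nearest free slot, paying one unit per moved item.

        def insert_key(array, key):
            """Naive file maintenance: insert `key` into `array` (length m, None = free,
            occupied slots hold keys in sorted order); returns the number of unit-cost
            moves.  Assumes at least one free slot remains."""
            n = len(array)
            # Index just past the last occupied slot holding a key smaller than `key`.
            target = 0
            for i in range(n):
                if array[i] is not None and array[i] < key:
                    target = i + 1
            if target < n and array[target] is None:
                array[target] = key
                return 0
            # Shift a block of items toward the nearest free slot (right or left).
            right = next((i for i in range(target, n) if array[i] is None), None)
            left = next((i for i in range(target - 1, -1, -1) if array[i] is None), None)
            moves = 0
            if right is not None and (left is None or right - target <= target - left):
                for i in range(right, target, -1):
                    array[i] = array[i - 1]
                    moves += 1
                array[target] = key
            else:
                for i in range(left, target - 1):
                    array[i] = array[i + 1]
                    moves += 1
                array[target - 1] = key
            return moves

    On adversarial inputs (e.g., keys arriving in decreasing order) this naive rule can pay Θ(n) per insertion, Θ(n^2) in total; the algorithms and lower bounds above pin down the best achievable total cost for a given label-space size m.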

    Node Labels in Local Decision

    The role of unique node identifiers in network computing is well understood as far as symmetry breaking is concerned. However, the unique identifiers also leak information about the computing environment - in particular, they provide some nodes with information related to the size of the network. It was recently proved that in the context of local decision, there are some decision problems such that (1) they cannot be solved without unique identifiers, and (2) unique node identifiers leak a sufficient amount of information such that the problem becomes solvable (PODC 2013). In this work we study the minimal amount of information that we need to leak from the environment to the nodes in order to solve local decision problems. Our key results are related to scalar oracles f that, for any given n, provide a multiset f(n) of n labels; then the adversary assigns the labels to the n nodes in the network. This is a direct generalisation of the usual assumption of unique node identifiers. We give a complete characterisation of the weakest oracle that leaks at least as much information as the unique identifiers. Our main result is the following dichotomy: we classify scalar oracles as large and small, depending on their asymptotic behaviour, and show that (1) any large oracle is at least as powerful as the unique identifiers in the context of local decision problems, while (2) for any small oracle there are local decision problems that still benefit from unique identifiers. Comment: Conference version to appear in the proceedings of SIROCCO 201

    Distributed Deterministic Broadcasting in Uniform-Power Ad Hoc Wireless Networks

    The development of many futuristic technologies, such as MANET, VANET, iThings, and nano-devices, depends on efficient distributed communication protocols in multi-hop ad hoc networks. The vast majority of research in this area focuses on designing heuristic protocols and analyzing their performance by simulations on networks generated randomly or obtained from practical measurements of some (usually small-size) wireless networks. Moreover, such studies often assume access to truly random sources, which is often not reasonable in the case of wireless devices. In this work we use a formal framework to study the problem of broadcasting and its time complexity in any two-dimensional Euclidean wireless network with uniform transmission powers. For the analysis, we consider two popular models of ad hoc networks based on the Signal-to-Interference-and-Noise Ratio (SINR): one with opportunistic links, and the other with randomly disturbed SINR. In the former model, we show that one of our algorithms accomplishes broadcasting in O(D log^2 n) rounds, where n is the number of nodes and D is the diameter of the network. If nodes know a priori the granularity g of the network, i.e., the inverse of the maximum transmission range over the minimum distance between any two stations, a modification of this algorithm accomplishes broadcasting in O(D log g) rounds. Finally, we modify both algorithms to make them efficient in the latter model with randomly disturbed SINR, at the cost of only a logarithmic slowdown. Ours are the first distributed deterministic solutions for the broadcast task that are provably efficient and well-scalable under the two models. Comment: arXiv admin note: substantial text overlap with arXiv:1207.673
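    For context, the SINR constraint underlying both models says that a transmission from a sender is received when its received power, divided by the ambient noise plus the summed received powers of all other concurrent transmitters, exceeds a threshold β; with uniform power P and path-loss exponent α, the power received at distance d is P / d^α. The helper below is only an illustrative check of this inequality (parameter names are ours, not the paper's).

        from math import dist

        def sinr_ok(receiver, sender, other_senders, power=1.0, alpha=3.0,
                    noise=1e-9, beta=1.0):
            """True if `sender` is heard at `receiver` under the uniform-power SINR
            model: (P / d(s, r)^alpha) / (noise + interference) >= beta."""
            signal = power / dist(sender, receiver) ** alpha
            interference = sum(power / dist(s, receiver) ** alpha for s in other_senders)
            return signal / (noise + interference) >= beta

        # Toy check: a nearby sender is heard despite one far-away interferer.
        print(sinr_ok(receiver=(0.0, 0.0), sender=(1.0, 0.0), other_senders=[(10.0, 0.0)]))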

    Interval Selection in the Streaming Model

    A set of intervals is independent when the intervals are pairwise disjoint. In the interval selection problem we are given a set I of intervals and we want to find an independent subset of intervals of largest cardinality. Let α(I) denote the cardinality of an optimal solution. We discuss the estimation of α(I) in the streaming model, where we only have one-time, sequential access to the input intervals, the endpoints of the intervals lie in {1, ..., n}, and the amount of memory is constrained. For intervals of different sizes, we provide an algorithm in the data stream model that computes an estimate α̂ of α(I) that, with probability at least 2/3, satisfies (1/2)(1 − ε)·α(I) ≤ α̂ ≤ α(I). For same-length intervals, we provide another algorithm in the data stream model that computes an estimate α̂ of α(I) that, with probability at least 2/3, satisfies (2/3)(1 − ε)·α(I) ≤ α̂ ≤ α(I). The space used by our algorithms is bounded by a polynomial in ε^{-1} and log n. We also show that no better estimations can be achieved using o(n) bits of storage. We also develop new, approximate solutions to the interval selection problem, where we want to report a feasible solution, that use O(α(I)) space. Our algorithms for the interval selection problem match the optimal results by Emek, Halldórsson and Rosén [Space-Constrained Interval Selection, ICALP 2012], but are much simpler. Comment: Minor correction
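    For reference (this is the textbook offline algorithm, not one of the streaming algorithms above), α(I) itself can be computed exactly by the greedy rule that scans intervals in order of right endpoint and keeps every interval disjoint from the last one kept:

        def max_independent_intervals(intervals):
            """Offline optimum for interval selection: intervals are (left, right)
            pairs, treated as closed; returns alpha(I), the size of a largest
            pairwise-disjoint subset."""
            count, last_right = 0, float("-inf")
            for left, right in sorted(intervals, key=lambda iv: iv[1]):
                if left > last_right:          # disjoint from the last chosen interval
                    count += 1
                    last_right = right
            return count

        print(max_independent_intervals([(1, 4), (2, 3), (5, 9), (6, 7)]))  # -> 2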

    Semi-Streaming Set Cover

    This paper studies the set cover problem under the semi-streaming model. The underlying set system is formalized in terms of a hypergraph G = (V, E) whose edges arrive one-by-one, and the goal is to construct an edge cover F ⊆ E with the objective of minimizing the cardinality (or cost in the weighted case) of F. We consider a parameterized relaxation of this problem, where given some 0 ≤ ε < 1, the goal is to construct an edge (1 − ε)-cover, namely, a subset of edges incident to all but an ε-fraction of the vertices (or their benefit in the weighted case). The key limitation imposed on the algorithm is that its space is limited to (poly)logarithmically many bits per vertex. Our main result is an asymptotically tight trade-off between ε and the approximation ratio: We design a semi-streaming algorithm that on input graph G, constructs a succinct data structure D such that for every 0 ≤ ε < 1, an edge (1 − ε)-cover that approximates the optimal edge (1 − ε)-cover within a factor of f(ε, n) can be extracted from D (efficiently and with no additional space requirements), where f(ε, n) = O(1/ε) if ε > 1/√n, and f(ε, n) = O(√n) otherwise. In particular, for the traditional set cover problem we obtain an O(√n)-approximation. This algorithm is proved to be best possible by establishing a family (parameterized by ε) of matching lower bounds. Comment: Full version of the extended abstract that will appear in Proceedings of ICALP 2014 track
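    The trade-off can be read off directly: the approximation factor degrades as 1/ε until ε drops below 1/√n, at which point it caps at √n (matching the classical set cover case ε = 0). A tiny illustrative helper, constants omitted:

        from math import sqrt

        def approx_factor_bound(eps: float, n: int) -> float:
            """Asymptotic approximation-factor bound f(eps, n) from the trade-off
            above: 1/eps when eps > 1/sqrt(n), and sqrt(n) otherwise."""
            return 1.0 / eps if eps > 1.0 / sqrt(n) else sqrt(n)

        for eps in (0.5, 0.1, 0.01, 0.0):
            print(eps, approx_factor_bound(eps, n=10_000))   # crossover at eps = 0.01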

    Exploration of Finite 2D Square Grid by a Metamorphic Robotic System

    We consider exploration of a finite 2D square grid by a metamorphic robotic system consisting of anonymous oblivious modules. The number of possible shapes of a metamorphic robotic system grows as the number of modules increases. The shape of the system serves as its memory and determines its functionality. We consider the effect of a global compass on the minimum number of modules necessary to explore a finite 2D square grid. We show that if the modules agree on the directions (north, south, east, and west), then three modules are necessary and sufficient for exploration from an arbitrary initial configuration; otherwise, five modules are necessary and sufficient for restricted initial configurations.

    Online Makespan Minimization with Parallel Schedules

    In online makespan minimization a sequence of jobs σ = J_1, ..., J_n has to be scheduled on m identical parallel machines so as to minimize the maximum completion time of any job. We investigate the problem with an essentially new model of resource augmentation. Here, an online algorithm is allowed to build several schedules in parallel while processing σ. At the end of the scheduling process the best schedule is selected. This model can be viewed as providing an online algorithm with extra space, which is invested to maintain multiple solutions. The setting is of particular interest in parallel processing environments where each processor can maintain a single or a small set of solutions. We develop a (4/3 + ε)-competitive algorithm, for any 0 < ε ≤ 1, that uses (1/ε)^{O(log(1/ε))} schedules. We also give a (1 + ε)-competitive algorithm, for any 0 < ε ≤ 1, that builds (m/ε)^{O(log(1/ε)/ε)} schedules, a number that is polynomial in m for any fixed ε. This value depends on m but is independent of the input σ. The performance guarantees are nearly best possible. We show that any algorithm that achieves a competitiveness smaller than 4/3 must construct Ω(m) schedules. Our algorithms make use of novel guessing schemes that (1) predict the optimum makespan of a job sequence σ to within a factor of 1 + ε and (2) guess the job processing times and their frequencies in σ. In (2) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant. The competitive ratios achieved using parallel schedules are considerably smaller than those in the standard problem without resource augmentation.
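    To illustrate the flavour of resource augmentation by parallel schedules (a simplified toy, not the paper's algorithm or its guessing scheme): maintain one greedy schedule per guess of the optimum makespan, abandon a guess whose schedule would overflow, and return the best surviving schedule at the end. All names and parameters below are ours.

        def parallel_schedules(jobs, m, guesses, slack=4 / 3):
            """Toy version of the parallel-schedules idea: for each makespan guess,
            greedily place jobs on the least-loaded machine as long as no machine
            exceeds slack * guess; return the best surviving schedule's makespan."""
            loads = {g: [0.0] * m for g in guesses}   # one schedule per guess
            alive = set(guesses)
            for p in jobs:                            # jobs arrive online
                for g in list(alive):
                    machine = min(range(m), key=lambda i: loads[g][i])
                    if loads[g][machine] + p <= slack * g:
                        loads[g][machine] += p
                    else:
                        alive.discard(g)              # this guess was too small
            return min(max(loads[g]) for g in alive) if alive else None

        # Usage: guesses forming a geometric grid above a crude lower bound on OPT.
        jobs = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
        lb = max(max(jobs), sum(jobs) / 3)            # simple lower bounds, m = 3
        print(parallel_schedules(jobs, m=3, guesses=[lb * (1.1 ** i) for i in range(8)]))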